
  • Insignificant coefficients and significant marginal effects after probit with interaction*

    Dear Statalisters,

    I have panel data and I am estimating a probit model with an interaction of two binary (0/1) variables. I am confused by the fact that the coefficient of the interaction term is insignificant, whereas the marginal effects are significant. I am aware that this is because two different hypotheses are being tested; however, my question is: does the insignificant interaction coefficient somehow undermine my marginal effects results? Can I safely claim that the probability of y is 19.4% at A=1 and B=1?

    Code:
    . probit y ib0.A##ib0.B, cluster(panelvar) allbaselevels
    
    Iteration 0:   log pseudolikelihood = -71.420508  
    Iteration 1:   log pseudolikelihood = -67.999869  
    Iteration 2:   log pseudolikelihood = -67.985199  
    Iteration 3:   log pseudolikelihood = -67.985199  
    
    Probit regression                                 Number of obs   =        140
                                                      Wald chi2(3)    =       8.36
                                                      Prob > chi2     =     0.0391
    Log pseudolikelihood = -67.985199                 Pseudo R2       =     0.0481
    
                                (Std. Err. adjusted for 48 clusters in fundmanagerid)
    ---------------------------------------------------------------------------------
                    |               Robust
                  y |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
    ----------------+----------------------------------------------------------------
                  A |
                 0  |          0  (base)
                 1  |  -.1303176   .3362637    -0.39   0.698    -.7893823     .528747
                    |
                  B |
                 0  |          0  (base)
                 1  |  -.7823049   .2881906    -2.71   0.007    -1.347148   -.2174618
                    |
                A#B |
               0 0  |          0  (base)
               0 1  |          0  (base)
               1 0  |          0  (base)
               1 1  |   .4094472   .4417938     0.93   0.354    -.4564527    1.275347
                    |
              _cons |  -.3584588   .2390024    -1.50   0.134    -.8268948    .1099772
    ---------------------------------------------------------------------------------
    
    . margins A#B
    
    Adjusted predictions                              Number of obs   =        140
    Model VCE    : Robust
    
    Expression   : Pr(y), predict()
    
    ---------------------------------------------------------------------------------
                    |            Delta-method
                    |     Margin   Std. Err.      z    P>|z|     [95% Conf. Interval]
    ----------------+----------------------------------------------------------------
               A#B  |
               0 0  |        .36    .089415     4.03   0.000     .1847498    .5352502
               0 1  |   .1269841    .040389     3.14   0.002     .0478231    .2061451
               1 0  |      .3125   .0925791     3.38   0.001     .1310483    .4939517
               1 1  |   .1944444      .0603     3.22   0.001     .0762586    .3126303
    ---------------------------------------------------------------------------------
    Looking forward to any comments and suggestions,

    Maiia

  • #2
    You are misinterpreting your -margins- output: these are not marginal effects, they are adjusted margins.

    The output that you show simply gives the predicted probability of y in each of the four combinations of values of A and B. The p-values test the hypotheses that the probability of y is zero in each of those conditions. That is clearly a useless hypothesis, the test of which is of no interest to even the most ardent advocates of null hypothesis significance testing. The confidence intervals in that output are probably useful, as in most contexts we are interested in knowing the uncertainty associated with the probability of y in each combination. But testing the hypothesis that the probability of y is zero is basically never of any use. (If I were rewriting the -margins- command, I would omit the p-value column from that output.)
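[Editor's note: to see concretely what those adjusted margins are, note that each one is just the standard normal CDF evaluated at the fitted linear index. A quick sanity check, in Python rather than Stata since it needs only the normal CDF; the coefficient values are copied from the probit output in #1:]

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF, Phi(x)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Coefficients copied from the probit output in #1
cons = -0.3584588   # _cons
bA   = -0.1303176   # 1.A
bB   = -0.7823049   # 1.B
bAB  =  0.4094472   # 1.A#1.B

# Each adjusted margin is Phi(linear index) for that A,B cell
p00 = norm_cdf(cons)                  # A=0, B=0 -> about .36
p10 = norm_cdf(cons + bA)             # A=1, B=0 -> about .3125
p01 = norm_cdf(cons + bB)             # A=0, B=1 -> about .1270
p11 = norm_cdf(cons + bA + bB + bAB)  # A=1, B=1 -> about .1944

# These reproduce the four rows of the -margins A#B- table in #1
print(p00, p10, p01, p11)
```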

    If you want marginal effects, the command is:
    Code:
    margins A, dydx(B)
    margins B, dydx(A)
    Now, it is possible that when you run those commands you will encounter a difference in statistical significance between the logistic regression coefficients and the marginal effects, and in that case there is something to discuss about how to interpret them. If that happens and you are unsure what to make of it, post back.

    Comment


    • #3
      Thank you for such an informative response, Clyde.

      Due to practicalities associated with my data source, I will be able to execute the code that you suggested on the actual data only on Monday; however, I am going to post output from bogus data and see if I am able to interpret the outcome correctly:

      Code:
      . margins B, dydx(A)
      
      Conditional marginal effects                    Number of obs     =        105
      Model VCE    : Robust
      
      Expression   : Pr(y), predict()
      dy/dx w.r.t. : 0.A
      
      ------------------------------------------------------------------------------
                   |            Delta-method
                   |      dy/dx   Std. Err.      z    P>|z|     [95% Conf. Interval]
      -------------+----------------------------------------------------------------
          0.A      |
                 B |
                0  |   .1157895    .208291     0.56   0.578    -.2924533    .5240322
                1  |   .0741935   .1077327     0.69   0.491    -.1369588    .2853459
      ------------------------------------------------------------------------------
      Note: dy/dx for factor levels is the discrete change from the base level.
      So is this a correct way to interpret this output: When A changes from the base level (which is 0) to 1, at B = 0, the change in probability of y is 11.6% and is not significant. When A changes from the base level (which is 0) to 1, at B = 1, the change in probability of y is 7.4% and is not significant.

      Additionally, regarding what you said
      The output that you show simply gives the predicted probability of y in each of the four combinations of values of A and B.
      So, based on the output in my original post in this thread, is it possible to claim, for instance: "The predicted probability of y is higher at A=1 B=1 than at A=0 B=1"? I am asking because I sense that you believe that such result would be uninformative/useless or am I wrong? If I am, it is further possible to claim that "The difference in predicted probability between cases A=1 B=1 and A=0 B=1 is 19.4-12.7=6.7%"?

      Comment


      • #4
        So is this a correct way to interpret this output: When A changes from the base level (which is 0) to 1, at B = 0, the change in probability of y is 11.6% and is not significant. When A changes from the base level (which is 0) to 1, at B = 1, the change in probability of y is 7.4% and is not significant.
        Almost. The only error in what you say is that the differences are not 11.6% and 7.4%; they are 11.6 percentage points and 7.4 percentage points, respectively. Otherwise, you have it right. To say that the difference is x% would be to say that the ratio of the two is 1 + x/100. The difference (not the ratio) between quantities measured in % is itself measured in percentage points.
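[Editor's note: to make the percent vs. percentage-point distinction concrete, a small arithmetic sketch (in Python; the two probabilities are the A=1 B=1 and A=0 B=1 margins from the output in #1):]

```python
# Predicted probabilities taken from the -margins A#B- table in #1
p_11 = 0.194  # Pr(y) at A=1, B=1
p_01 = 0.127  # Pr(y) at A=0, B=1

# A difference of probabilities is measured in percentage points
diff_pp = (p_11 - p_01) * 100       # 6.7 percentage points

# A relative (ratio) change is what "x% higher" means -- a very different number
rel_pct = (p_11 / p_01 - 1) * 100   # roughly 53% higher

print(diff_pp, rel_pct)
```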

        So, based on the output in my original post in this thread, is it possible to claim, for instance: "The predicted probability of y is higher at A=1 B=1 than at A=0 B=1"? I am asking because I sense that you believe that such result would be uninformative/useless or am I wrong? If I am, it is further possible to claim that "The difference in predicted probability between cases A=1 B=1 and A=0 B=1 is 19.4-12.7=6.7%"?
        This is, again, correct except for the difference between % and percentage points.

        And I think that the output from the original -margins- command is very important. In fact, in my own work, it is as important as, if not more important than, the marginal effects. If my tone sounded skeptical in #2 it was because I wanted to emphasize that you had one thing but were thinking you had something different. I wanted to make sure you understood that the two things are different. What I do think is useless and unimportant is the p-values in the adjusted margins output. But everything else in that table is, in my mind, very important.

        The difference between A = 1 B = 1 and A=0 B=1 is, as you calculate, 6.7 percentage points. When you go in on Monday and run the marginal effects analysis on your real data, you will get this same answer. The reason for doing it by -margins B, dydx(A)- rather than just subtracting the adjusted margins as you have done, is that your calculation does not provide standard errors, confidence intervals or (if you want them) p-values for the result.

        Anyway, you are definitely on the right track here. Keep going!

        Comment


        • #5
          Thanks again Clyde for the excellent comments. The percentage points remark is very important and I am glad you pointed it out. And exactly as you say, after I ran the -margins A#B- command on my bogus data, the difference between A = 1 B = 1 and A=0 B=1 is exactly the same as the marginal effect for B =1 in the output of -margins B, dydx(A)- in #3, which is 7.4 percentage points. Tomorrow I will run this on my actual data and see whether the marginal effects are significant.

          Comment


          • #6
            So the marginal effects are not significant:

            Code:
            . margins B, dydx(A)
            
            Conditional marginal effects                      Number of obs   =        140
            Model VCE    : Robust
            
            Expression   : Pr(y), predict()
            dy/dx w.r.t. : 0.A
            
            ------------------------------------------------------------------------------
                         |            Delta-method
                         |      dy/dx   Std. Err.      z    P>|z|     [95% Conf. Interval]
            -------------+----------------------------------------------------------------
                0.A      |
                       B |
                      0  |      .0475   .1221439     0.39   0.697    -.1918976    .2868976
                      1  |  -.0674603   .0724177    -0.93   0.352    -.2093965    .0744759
            ------------------------------------------------------------------------------
            Note: dy/dx for factor levels is the discrete change from the base level.
            Given these results, can I still claim that the probability of y is 19.4% at A=1 and B=1 and that the difference in predicted probability between cases A=1 B=1 and A=0 B=1 is 6.7 percentage points? I would guess that the latter claim is now incorrect, because of the insignificance of marginal effects, or am I wrong?

            Additionally,

            When I run the same code on the other (smaller) sample, I get a following result:

            Code:
            . probit y ib1.A##i.B, cluster(panelvar) allbaselevels 
            
            Iteration 0:   log pseudolikelihood = -47.687371  
            Iteration 1:   log pseudolikelihood = -43.521809  
            Iteration 2:   log pseudolikelihood = -43.466983  
            Iteration 3:   log pseudolikelihood = -43.466968  
            Iteration 4:   log pseudolikelihood = -43.466968  
            
            Probit regression                                 Number of obs   =         82
                                                              Wald chi2(3)    =       8.29
                                                              Prob > chi2     =     0.0403
            Log pseudolikelihood = -43.466968                 Pseudo R2       =     0.0885
            
                                        (Std. Err. adjusted for 37 clusters in panelvar)
            ---------------------------------------------------------------------------------
                            |               Robust
                          y |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
            ----------------+----------------------------------------------------------------
                          A |
                         0  |  -.8101109   .6372319    -1.27   0.204    -2.059062    .4388406
                         1  |          0  (base)
                            |
                          B |
                         0  |          0  (base)
                         1  |  -.2734166   .3362085    -0.81   0.416    -.9323731    .3855399
                            |
                        A#B |
                       0 0  |          0  (base)
                       0 1  |  -.1421559   .5591922    -0.25   0.799    -1.238152    .9538406
                       1 0  |          0  (base)
                       1 1  |          0  (base)
                            |
                      _cons |  -.1573107   .2522572    -0.62   0.533    -.6517257    .3371043
            ---------------------------------------------------------------------------------
            
            . margins A#B
            
            Adjusted predictions                              Number of obs   =         82
            Model VCE    : Robust
            
            Expression   : Pr(y), predict()
            
            ---------------------------------------------------------------------------------
                            |            Delta-method
                            |     Margin   Std. Err.      z    P>|z|     [95% Conf. Interval]
            ----------------+----------------------------------------------------------------
                        A#B |
                       0 0  |   .1666667   .1259395     1.32   0.186    -.0801703    .4135036
                       0 1  |   .0833333   .0522118     1.60   0.110    -.0189999    .1856665
                       1 0  |      .4375   .0993985     4.40   0.000     .2426825    .6323175
                       1 1  |   .3333333   .0750958     4.44   0.000     .1861482    .4805185
            ---------------------------------------------------------------------------------
            
            . margins B, dydx(A)
            
            Conditional marginal effects                      Number of obs   =         82
            Model VCE    : Robust
            
            Expression   : Pr(y), predict()
            dy/dx w.r.t. : 0.A
            
            ------------------------------------------------------------------------------
                         |            Delta-method
                         |      dy/dx   Std. Err.      z    P>|z|     [95% Conf. Interval]
            -------------+----------------------------------------------------------------
                0.A      |
                       B |
                      0  |  -.2708333   .1855788    -1.46   0.144     -.634561    .0928943
                      1  |       -.25   .0900062    -2.78   0.005    -.4264088   -.0735912
            ------------------------------------------------------------------------------
            Note: dy/dx for factor levels is the discrete change from the base level.
            Here one of the marginal effects and two of the adjusted margins are significant, but the interaction coefficient in the probit is not. Again, what can I claim with such a mixture of significances and insignificances?

            Comment


            • #7
              Clyde Schechter: This is just to make a remark on a comment of yours in #2: "If I were rewriting the margins command, I would omit the p-value column from that output".

              I absolutely agree with you about the risk of the results becoming misleading to the uninformed reader.

              As you also know, there is the option -nopvalues- to omit the p-values as well as the z statistics, which I personally use as "my default" for the -margins- command.


              Code:
              . webuse union
              (NLS Women 14-24 in 1968)
              
              . sample 10
              (23,580 observations deleted)
              
              . probit union age grade i.not_smsa##i.south
              
              Iteration 0:   log likelihood = -1367.2151  
              Iteration 1:   log likelihood = -1339.9797  
              Iteration 2:   log likelihood = -1339.8739  
              Iteration 3:   log likelihood = -1339.8739  
              
              Probit regression                               Number of obs     =      2,620
                                                              LR chi2(5)        =      54.68
                                                              Prob > chi2       =     0.0000
              Log likelihood = -1339.8739                     Pseudo R2         =     0.0200
              
              --------------------------------------------------------------------------------
                       union |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
              ---------------+----------------------------------------------------------------
                         age |   .0026202   .0042802     0.61   0.540    -.0057687    .0110091
                       grade |   .0286378   .0115859     2.47   0.013     .0059298    .0513457
                  1.not_smsa |   -.035876   .0820752    -0.44   0.662    -.1967405    .1249885
                     1.south |  -.3098796   .0698673    -4.44   0.000     -.446817   -.1729423
                             |
              not_smsa#south |
                        1 1  |  -.1257633    .127812    -0.98   0.325    -.3762702    .1247436
                             |
                       _cons |   -1.09174    .190788    -5.72   0.000    -1.465678   -.7178024
              --------------------------------------------------------------------------------
              
              . margins not_smsa##south
              
              Predictive margins                              Number of obs     =      2,620
              Model VCE    : OIM
              
              Expression   : Pr(union), predict()
              
              --------------------------------------------------------------------------------
                             |            Delta-method
                             |     Margin   Std. Err.      z    P>|z|     [95% Conf. Interval]
              ---------------+----------------------------------------------------------------
                    not_smsa |
                          0  |   .2234368   .0095822    23.32   0.000      .204656    .2422176
                          1  |   .2013186     .01516    13.28   0.000     .1716056    .2310316
                             |
                       south |
                          0  |    .256561   .0112203    22.87   0.000     .2345697    .2785524
                          1  |   .1594291   .0114806    13.89   0.000     .1369276    .1819307
                             |
              not_smsa#south |
                        0 0  |    .259907   .0127731    20.35   0.000     .2348721    .2849419
                        0 1  |   .1703804   .0145496    11.71   0.000     .1418638    .1988971
                        1 0  |   .2484391   .0227373    10.93   0.000     .2038749    .2930033
                        1 1  |   .1326698   .0171228     7.75   0.000     .0991096    .1662299
              --------------------------------------------------------------------------------
              
              . margins not_smsa##south, nopvalues
              
              Predictive margins                              Number of obs     =      2,620
              Model VCE    : OIM
              
              Expression   : Pr(union), predict()
              
              ----------------------------------------------------------------
                             |            Delta-method
                             |     Margin   Std. Err.     [95% Conf. Interval]
              ---------------+------------------------------------------------
                    not_smsa |
                          0  |   .2234368   .0095822       .204656    .2422176
                          1  |   .2013186     .01516      .1716056    .2310316
                             |
                       south |
                          0  |    .256561   .0112203      .2345697    .2785524
                          1  |   .1594291   .0114806      .1369276    .1819307
                             |
              not_smsa#south |
                        0 0  |    .259907   .0127731      .2348721    .2849419
                        0 1  |   .1703804   .0145496      .1418638    .1988971
                        1 0  |   .2484391   .0227373      .2038749    .2930033
                        1 1  |   .1326698   .0171228      .0991096    .1662299
              ----------------------------------------------------------------

              If I understood right, your remark reflected a wise criticism of the potentially misleading default output.
              Last edited by Marcos Almeida; 04 Dec 2017, 14:36.
              Best regards,

              Marcos

              Comment


              • #8
                Marcos Almeida Actually, I was not aware of the -nopvalues- option! Thank you for telling me about it--I will use it from now on.

                Maiia Sleptcova It is important to remember that the difference between statistically significant and statistically insignificant is not, itself, statistically significant, important, or even meaningful. Every estimate has to be interpreted in its own right, and in the context of what it means. The fact that some effect is "significant" in one analysis and "insignificant" in another is never surprising, nor important. (If you want to explicitly contrast the effect in two subpopulations, that is best done with an interaction model, and then you can actually estimate the contrast itself.)

                It is also important to remember that statistical significance tells you nothing about the importance, existence, or magnitude of any effects. p-values tell you the extent to which the data disagree with a pre-determined analytic model. A non-significant effect does not thereby become "no effect" or "zero." If you are in the business of estimating effects of things like policies, in fact, you should pay no mind to the p-values at all. You should be interested in the effect estimates themselves and the confidence intervals around them. The confidence intervals will tell you what kind of uncertainty attaches to the effect estimates. When an effect is not statistically significant, the implication is that the uncertainty is sufficiently great that you cannot even be sure what direction it goes in. But that does not mean there is no effect or that the effect is zero. It just means that it has been very coarsely estimated by your study and that, if it's really important to know more about this effect, a better study will need to be done to pin it down more closely.

                The situation in non-linear models is particularly complicated. A probit regression (and a logit does the same thing) estimates a single effect for the entire data set. That effect is in a very unintuitive metric, the inverse normal probability distribution function metric, which few people find sensible. (With a logit it is in the log-odds or odds-ratio metric, which is, by contrast, fairly easy to grasp.) When you have a constant marginal effect of something in one of these metrics, you necessarily have a varying marginal effect in the probability metric. That is, the marginal effect on probabilities depends on the baseline probability. And, in fact, you can pretty much always find both significant and non-significant effects on probabilities from the same model, depending on the baseline probability. In both logit and probit models, no matter how large the effect is in the normal distribution or log-odds metric (the probit/logit coefficients), you can always find arbitrarily small probability-metric marginal effects if you look at a sufficiently extreme (close to zero or one) baseline probability. This is a result of the non-linearity of the logit and probit link functions. In both cases, the curves become very flat near probabilities of zero and one, so even a very large change in the logit or probit index can correspond to a minuscule change in probability.
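[Editor's note: this flattening is easy to see numerically. A minimal sketch (Python, using only the standard normal CDF; the coefficient value of 1.0 is a hypothetical choice, not from the thread's models):]

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF, Phi(x)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

beta = 1.0  # one fixed, fairly large probit coefficient (hypothetical)

# The same coefficient applied at two different baseline linear indices
effect_mid  = norm_cdf(0.0 + beta) - norm_cdf(0.0)    # baseline Pr = 0.50
effect_tail = norm_cdf(-3.0 + beta) - norm_cdf(-3.0)  # baseline Pr is about 0.001

# Identical probit-metric effect, very different probability-metric effects:
# roughly 0.34 in the middle of the curve vs roughly 0.02 in the flat tail
print(effect_mid, effect_tail)
```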

                Some people, for this reason, prefer to focus on the regression coefficients, despite the fact that they are somewhat difficult (logit) or nearly impossible (probit) to interpret and explain.

                If, however, you are trying to assess a policy decision, you cannot really avoid the probability metric. That's because any decision analysis has to look at expected utility, and that is calculated in terms of probabilities of outcomes, not log odds, and certainly not probits. That is the framework I normally use in my work, so I tend to disregard the regression coefficients and rely on what the -margins- outputs tell me.

                If I were you, I would report these marginal effects along with their confidence intervals. I would not report the p-values unless somebody with power over you requires you to. I would interpret these effects in terms of their practical importance in whatever field you are studying, and I would be sure to emphasize the degree to which your estimates are precise (narrow confidence limits) or imprecise (wide confidence limits).

                Comment


                • #9
                  See this earlier thread for an extended discussion of similar issues:

                  https://www.statalist.org/forums/for...s-significance

                  I probably wouldn't go so far as to omit the p-values from the margins output. But, I don't pay that much attention to them. I am much more interested in the p-values of the original coefficients. I mostly use margins because I think it is an aid to interpretation.
                  -------------------------------------------
                  Richard Williams, Notre Dame Dept of Sociology
                  Stata Version: 17.0 MP (2 processor)

                  EMAIL: [email protected]
                  WWW: https://www3.nd.edu/~rwilliam

                  Comment


                  • #10
                    I read somewhere (hopefully I’ll find the source, for I found it convincing enough) about, say, a two-step approach: checking the coefficients and p-values of the probit model, then getting the margins and marginsplot ‘with an eye’ on the ‘culprit’ predictors.

                    Clyde Schechter it’s quite a challenge informing you of anything ‘new’ whatsoever concerning the margins command. 😃
                    Best regards,

                    Marcos

                    Comment


                    • #11
                      Thanks everyone for your comments.

                      I am not that concerned with the significance or insignificance per se, but rather confused about how (and whether) I can interpret my marginal effects and adjusted margins results when the coefficients of the probit regression are insignificant. The result that I am expecting to get is that the probability of y is larger at A=1 B=1 than at A=0 B=1, which I seem to get from the marginal effects and adjusted margins output, but my probit regression coefficients are insignificant. I understand that statistical significance is generic and arbitrary; however, I am worried that if I claim, based on my marginal effects output, that y is more probable at A=1 B=1 than at A=0 B=1, someone will point a finger at me saying: 'oh, but can you actually claim that if your probit coefficients are insignificant?' For example, Richard Williams tells us here, and also in the discussion he linked to, that he is primarily interested in the p-values of the original coefficients. Mine are insignificant. What does that mean for my marginal effects and adjusted margins? That is, can I still go on and interpret them independently (in the sense that they can be interpreted in their own right) of the probit coefficients? Or does the insignificance of the probit coefficients somehow annul or obstruct the results from -margins-? From Clyde Schechter 's post #8 I understood that they can indeed be looked at independently; did I get that right?

                      Comment


                      • #12
                        With regard to the "approach" to the non-significant p-values of the probit model, I gather they should be taken as you'd take them (in frequentist terms) in any sort of regression. With regard to the p-values displayed after the -margins- command, much has been said.

                        To end, considering the output shown in #1 and #6, I believe you could give the interaction term a pass, so to speak.
                        Best regards,

                        Marcos

                        Comment


                        • #13
                          Thanks for clarifying that, Marcos.

                          Comment


                          • #14
                            Since I again have doubts about this topic that I raised earlier in December, I decided I am going to ask the question again: do insignificant probit coefficients hinder my margins interpretation? So far I have got the following opinions:
                            - Marcos Almeida in this post under #12 thinks I
                            could give the interaction term a pass
                            because of the insignificant probit coefficient.
                            - In this https://www.statalist.org/forums/for...s-after-mlogit post under #2 Stephen Jenkins says that
                            In short, I don't think that eyeballing the p-value on your interaction term in a non-linear model is necessarily informative about the statistical significance of cross-partial effects of the sort you may wish to calculate using margins
                            - Richard Williams says also in this post in #9 that he is
                            more interested in the p-values of the original coefficients.
                            - From Clyde Schechter's answer #8 I inferred that I can take the liberty to look at the marginal effects independently despite the probit coefficients insignificance, but I am not 100% sure I got it right.

                            So, all in all, I have gathered contradicting opinions on the matter. Disclaimer: I am aware that these are different hypotheses being tested, and I am more interested in the marginal effects; however, I want to know whether there is any mistake in not paying attention to the insignificance of the probit coefficients and solely interpreting the marginal effects.

                            Thanks in advance for any input.

                            Comment


                            • #15
                              Clyde Schechter
                              Now, it is possible that when you run those commands you will encounter a difference in statistical significance between the logistic regression coefficients and the marginal effects, and in that case there is something to discuss about how to interpret them. If that happens and you are unsure what to make of it, post back.

                              Hi, all.
                              Hope everyone is well.
                              I came across this old post and I am having a slightly similar problem to what you are describing above.
                              More specifically, I run a Poisson regression where my main variables of interest are the continuous variables rcahome and rcahost (x1 and x2 respectively). I have three-dimensional data with year, country and industry.
                              I have created 3-way interaction terms with two categorical variables named OBORTime and OBORCountry, which are time and country dummies respectively.

                              Code:
                               poisson Noofinvestements c3 c5 c6 c7 c8 c9 c10 i.OBORTime##i.OBORCountry##c.rcahome i.OBORTime##i.OBORCountry##c.rcahost i.industry, vce(robust)

                              I want to understand the effect of the interaction terms, and for this reason I am also computing the marginal effects of my primary independent variable (rcahome) over the two values of my moderating variables (OBORTime and OBORCountry) with the commands below:


                              Code:
                               margins OBORCountry, dydx(rcahome)
                               margins r.OBORCountry, dydx(rcahome)
                               margins OBORTime, dydx(rcahome)
                               margins r.OBORTime, dydx(rcahome)

                              My confusion lies in the fact that when I run the Poisson regression, the three-way interaction term OBORTime*OBORCountry*rcahome is statistically significant with a negative sign.
                              However, when I compute the margins and margins contrast commands, the sign changes to positive.

                              Code:
                               margins OBORCountry, dydx(rcahome)
                              
                              Average marginal effects                        Number of obs     =     16,393
                              Model VCE    : Robust
                              
                              Expression   : Predicted number of events, predict()
                              dy/dx w.r.t. : rcahome
                              
                              ------------------------------------------------------------------------------
                                           |            Delta-method
                                           |      dy/dx   Std. Err.      z    P>|z|     [95% Conf. Interval]
                              -------------+----------------------------------------------------------------
                              rcahome      |
                               OBORCountry |
                                        0  |  -.0103857   .0117027    -0.89   0.375    -.0333225    .0125511
                                        1  |   .0327451   .0084978     3.85   0.000     .0160897    .0494004
                              ------------------------------------------------------------------------------
                              Code:
                               margins r.OBORCountry, dydx(rcahome)
                              
                              Contrasts of average marginal effects
                              Model VCE    : Robust
                              
                              Expression   : Predicted number of events, predict()
                              dy/dx w.r.t. : rcahome
                              
                              ------------------------------------------------
                                           |         df        chi2     P>chi2
                              -------------+----------------------------------
                              rcahome      |
                               OBORCountry |          1       10.73     0.0011
                              ------------------------------------------------
                              
                              --------------------------------------------------------------
                                           |   Contrast Delta-method
                                           |      dy/dx   Std. Err.     [95% Conf. Interval]
                              -------------+------------------------------------------------
                              rcahome      |
                               OBORCountry |
                                 (1 vs 0)  |   .0431308   .0131654      .0173271    .0689344
                              --------------------------------------------------------------

                              May I also ask how to interpret results when the estimated coefficient of an interaction term is statistically insignificant, but the marginal effect at some values of the moderating variables is statistically significant?

                              Thank you very much for your time and help.

                              Kind regards,
                              Lida

